

HRN: A Holistic Approach to One Class Learning

Neural Information Processing Systems

Existing neural-network-based one-class learning methods mainly use various forms of auto-encoders or GAN-style adversarial training to learn a latent representation of the given class of data. This paper proposes an entirely different approach based on a novel regularization, called holistic regularization (or H-regularization), which enables the system to consider the data holistically rather than producing a model biased towards a subset of features. Combined with a proposed 2-norm instance-level data normalization, we obtain an effective one-class learning method, called HRN. To our knowledge, neither the proposed regularization nor the normalization method has been reported before. Experimental evaluation on both benchmark image classification and traditional anomaly detection datasets shows that HRN markedly outperforms state-of-the-art deep and non-deep learning models.
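The abstract does not spell out the 2-norm instance-level normalization; a minimal sketch of its most natural reading (rescaling each input instance to unit L2 norm, with the `eps` guard against zero vectors being an addition of my own, not taken from the paper) is:

```python
import numpy as np

def l2_instance_normalize(x, eps=1e-8):
    """Scale each instance (row) of x to (approximately) unit L2 norm.

    eps guards against division by zero for all-zero instances;
    it is an illustrative choice, not from the paper.
    """
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x / (norms + eps)

# Three toy 4-dimensional instances
batch = np.array([[3.0, 4.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0, 2.0]])
normalized = l2_instance_normalize(batch)
# Every row of `normalized` now has L2 norm close to 1
```

Note that this normalization is per instance, not per feature: each sample is rescaled independently, so relative feature magnitudes within a sample are preserved while overall scale differences between samples are removed.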


Review for NeurIPS paper: HRN: A Holistic Approach to One Class Learning

Neural Information Processing Systems

Weaknesses: Some aspects that need to be improved are as follows. It is not precise to state that methods based on auto-encoders, GANs, or self-supervised classification are one-class learning approaches; the objective functions in these methods are very different from the one-class learning objective. I would suggest rephrasing such statements to avoid misleading readers. I also believe the authors should provide these results in the supplementary material and add discussion to help readers fully understand the key differences and the insights behind the H-regularization.


Review for NeurIPS paper: HRN: A Holistic Approach to One Class Learning

Neural Information Processing Systems

This paper proposes a novel deep one-class classification method in which a regularization technique is specially designed for the one-class classification problem. It also provides insights into the bottlenecks of previous methods for this problem; one insight is quite novel and had not been considered before (representation learning from one-class data is biased towards the given training data), since previous methods mainly focused on the other (deep network outputs become over-confident given one-class data). I feel the paper may further inspire more cleverly designed methods for this problem! While the reviewers initially had some concerns (mainly about clarity and about the generality of the method, which relates to its significance), the authors did a particularly good job in their rebuttal (showing that the proposal is not limited to a single surrogate loss function). Thus, in the end, all of us agreed to accept this paper for publication. Please carefully address the concerns from all three reviewers in the next version.



Single Class Universum-SVM

Dhar, Sauptik, Cherkassky, Vladimir

arXiv.org Artificial Intelligence

This paper extends the idea of Universum learning [1, 2] to single-class learning problems. We propose a Single Class Universum-SVM setting that incorporates a priori knowledge (in the form of additional data samples) into the single-class estimation problem. These additional data samples, or Universum, belong to the same application domain as the (positive) data samples from the single class of interest, but they follow a different distribution. The proposed methodology for single-class U-SVM is based on the known connection between binary classification and single-class learning formulations [3]. Several empirical comparisons are presented to illustrate the utility of the proposed approach.
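The setting described above (Universum samples from the same domain but a different distribution, used to inform a single-class decision rule) can be illustrated with a toy sketch. This is deliberately not the paper's U-SVM formulation: it is a minimal distance-based stand-in, and all distributions, thresholds, and names below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-class (positive) data: points near the origin
positives = rng.normal(0.0, 1.0, size=(200, 2))
# Toy Universum samples: same domain, different distribution (shifted)
universum = rng.normal(4.0, 1.0, size=(50, 2))

center = positives.mean(axis=0)

def score(x):
    """Distance to the positive-class center; lower = more typical."""
    return np.linalg.norm(x - center, axis=1)

# Incorporate the Universum as a priori knowledge: place the decision
# threshold between the bulk of positive scores and the bulk of
# Universum scores (illustrative rule, not the paper's optimization)
pos_edge = np.quantile(score(positives), 0.95)
univ_edge = np.quantile(score(universum), 0.05)
threshold = 0.5 * (pos_edge + univ_edge)

accept_pos = np.mean(score(positives) <= threshold)  # fraction accepted
reject_univ = np.mean(score(universum) > threshold)  # fraction rejected
```

The point of the sketch is only the role the Universum plays: without it, the threshold would have to be set from the positive class alone, whereas the extra samples give a second reference distribution to push the boundary against.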